
    Point triangulation through polyhedron collapse using the l∞ norm

    Multi-camera triangulation of feature points based on minimising the overall l2 reprojection error can get stuck in suboptimal local minima or require slow global optimisation. For this reason, researchers have proposed optimising the l∞ norm of the l2 single-view reprojection errors, which avoids the problem of local minima entirely. In this paper we present a novel method for l∞ triangulation that minimises the l∞ norm of the l∞ reprojection errors: this apparently small difference leads to a much faster but equally accurate solution, which is related to the maximum likelihood estimate under the assumption of uniform noise. The proposed method adopts a new optimisation strategy based on solving simple quadratic equations. This stands in contrast with the fastest existing methods, which solve a sequence of more complex auxiliary Linear Programs or Second-Order Cone Programs. The proposed algorithm performs well: it achieves the same accuracy as existing techniques while executing faster and being straightforward to implement.
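    With the l∞ single-view error, each feasibility test at a candidate error bound γ becomes a plain linear program, and bisection on γ yields the global optimum. The sketch below shows that generic bisection-plus-LP baseline (the kind of auxiliary-problem method the paper improves on), not the authors' quadratic-equation scheme; the camera matrices Ps, observations xs, and the search interval are hypothetical inputs, and scipy is used for the LP.

```python
import numpy as np
from scipy.optimize import linprog

def linf_feasible(Ps, xs, gamma, box=10.0):
    """Does a 3D point X exist whose l_inf reprojection error is
    <= gamma in every view? With the l_inf per-view error this is
    a linear feasibility problem in X.
    Ps: list of 3x4 camera matrices; xs: list of observed (u, v)."""
    A, b = [], []
    for P, x in zip(Ps, xs):
        p1, p2, p3 = P                       # rows of the camera matrix
        for pr, xo in ((p1, x[0]), (p2, x[1])):
            # |pr.X~ - xo * p3.X~| <= gamma * p3.X~  with X~ = [X; 1]
            A.append(pr - (xo + gamma) * p3); b.append(0.0)
            A.append(-pr + (xo - gamma) * p3); b.append(0.0)
        A.append(-p3); b.append(-1e-6)       # cheirality: depth > 0
    A, b = np.array(A), np.array(b)
    # substitute the homogeneous coordinate X~[3] = 1
    res = linprog(np.zeros(3), A_ub=A[:, :3], b_ub=b - A[:, 3],
                  bounds=[(-box, box)] * 3, method="highs")
    return res.success, res.x

def triangulate_linf(Ps, xs, lo=0.0, hi=50.0, tol=1e-3):
    """Bisection on gamma down to tolerance tol (pixels)."""
    X = None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok, cand = linf_feasible(Ps, xs, mid)
        if ok:
            hi, X = mid, cand
        else:
            lo = mid
    return X, hi
```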

    From light rays to 3D models


    Variational multi-image stereo matching

    In two-view stereo matching, the disparity of occluded pixels cannot be estimated accurately in a direct manner: it needs to be inferred through, e.g., regularisation. When capturing scenes using a plenoptic camera or a camera dolly on a track, more than two input images are available, and, contrary to the two-view case, pixels in the central view will only very rarely be occluded in all of the other views. By explicitly handling occlusions, we can limit the depth estimation of a pixel p to use only those cameras that actually observe p. We do this by extending variational stereo matching to multiple views, and by explicitly handling occlusion on a view-by-view basis. The resulting depth maps are shown to be sharper and less noisy than those of typical recent techniques working on light fields.
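    The core idea, dropping occluded views per pixel, can be illustrated with a simple best-k cost aggregation; the sketch below is a non-variational stand-in for the paper's view-by-view occlusion handling and assumes the side views have already been warped into the central view for each depth hypothesis.

```python
import numpy as np

def occlusion_aware_depth(center, warped, keep_frac=0.5):
    """Winner-take-all depth with per-pixel view selection.
    center: (H, W) central view; warped: (D, V, H, W) array of the
    V side views warped into the central view for each of the D
    depth hypotheses."""
    # photometric difference of every warped view to the central view
    diffs = np.abs(warped - center[None, None])        # (D, V, H, W)
    # occlusion handling: a view that occludes a pixel produces a
    # large difference, so keep only the k most consistent views
    k = max(1, int(keep_frac * warped.shape[1]))
    cost = np.sort(diffs, axis=1)[:, :k].mean(axis=1)  # (D, H, W)
    return cost.argmin(axis=0)                         # depth labels
```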

    Image restoration using deep learning

    We propose a new image restoration method that reduces noise and blur in degraded images. In contrast to many state-of-the-art methods, ours does not rely on intensive iterative approaches; instead, it uses a pre-trained convolutional neural network.
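    To illustrate the feed-forward idea (a single network pass instead of an iterative solver), a DnCNN-style residual denoiser can be written in a few lines of PyTorch; the architecture below is a hypothetical sketch, not the paper's network.

```python
import torch.nn as nn

class DenoiseCNN(nn.Module):
    """Residual denoiser: the network predicts the degradation,
    which is subtracted from the input image."""
    def __init__(self, channels=1, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(True)]
        layers.append(nn.Conv2d(width, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):              # x: (B, C, H, W)
        return x - self.body(x)        # input minus predicted noise
```

    Once such a network is trained on pairs of degraded and clean images, restoring a new image is a single forward pass, which is what makes the approach fast at test time.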

    Line-constrained camera location estimation in multi-image stereomatching

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take snapshots from many camera locations along a defined trajectory (usually uniformly spaced along a line or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between two images is related to both the distance of the camera to the object and the distance between the two camera positions. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences instead, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, through the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our approach is a valid alternative to sparse techniques, and its highly parallelizable nature means it still executes in a reasonable time on a graphics card.
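    What makes dense correspondences usable here is the linear relation between disparity and inverse depth on a linear trajectory: d_i(p) = f · b_i / Z(p), with b_i the offset of view i along the line relative to the reference view. Given a dense depth estimate, each b_i follows from a one-parameter least-squares fit, as in this sketch (the names and the fixed-depth assumption are illustrative; the paper's framework estimates depth and locations jointly).

```python
import numpy as np

def estimate_baselines(disparities, inv_depth, f=1.0):
    """Recover each camera's offset b_i along the line from dense
    disparities d_i(p) = f * b_i / Z(p), given an inverse-depth
    map for the reference view.
    disparities: list of (H, W) maps; inv_depth: (H, W) map of 1/Z."""
    w = f * inv_depth.ravel()          # per-pixel slope f / Z(p)
    denom = w @ w + 1e-12
    # closed form of  argmin_b  sum_p (d_i(p) - b * w(p))^2
    return [(w @ d.ravel()) / denom for d in disparities]
```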

    3D reconstruction of maize plants in the PhenoVision system

    In order to efficiently study the impact of environmental changes, or the differences between various genotypes, large numbers of plants need to be measured. At the VIB, a system named PhenoVision was built to automatically image plants during their growth. This system is used to evaluate the impact of drought on different maize genotypes. To this end, we require 3D reconstructions of the maize plants, which we obtain through voxel carving.
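    Voxel carving (shape from silhouette) keeps a voxel only if it projects inside the plant silhouette in every camera view. A minimal sketch, with hypothetical inputs (voxel centres, 3x4 projection matrices, and binary plant masks):

```python
import numpy as np

def voxel_carve(voxels, cams, masks):
    """voxels: (N, 3) voxel centres; cams: list of 3x4 projection
    matrices; masks: list of (H, W) boolean plant silhouettes.
    Returns a boolean occupancy flag per voxel."""
    Xh = np.c_[voxels, np.ones(len(voxels))]        # homogeneous coords
    occupied = np.ones(len(voxels), dtype=bool)
    for P, mask in zip(cams, masks):
        proj = Xh @ P.T                             # (N, 3)
        z = np.maximum(proj[:, 2], 1e-9)            # assume voxels in front
        u = np.rint(proj[:, 0] / z).astype(int)
        v = np.rint(proj[:, 1] / z).astype(int)
        H, W = mask.shape
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                             # carve away misses
    return occupied
```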

    GPU-based maize plant analysis: accelerating CNN segmentation and voxel carving

    PhenoVision is a high-throughput phenotyping system for crop plants in greenhouse conditions. A conveyor belt transports the plants between automated irrigation stations and imaging cabins. The aim is to phenotype maize varieties grown under different conditions. To this end, we model the plants in 3D and automate their measurement.

    Machine learning for maize plant segmentation

    High-throughput plant phenotyping platforms produce immense volumes of image data. Here, a binary segmentation of maize colour images is required for the 3D reconstruction of plant structure and the measurement of growth traits. To this end, we employ a convolutional neural network (CNN), which performs this segmentation successfully.
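    A minimal fully-convolutional sketch of such a binary segmenter (hypothetical, and far smaller than a production network; it would be trained with a per-pixel binary cross-entropy loss against hand-labelled masks):

```python
import torch.nn as nn

class PlantSegNet(nn.Module):
    """Tiny fully-convolutional network producing a per-pixel
    plant/background logit for an RGB input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(32, 1, 1),           # 1x1 conv: per-pixel logit
        )

    def forward(self, x):                  # x: (B, 3, H, W)
        return self.net(x)                 # threshold sigmoid(.) for a mask
```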

    Fast and robust variational optical flow for high-resolution images using SLIC superpixels

    We show how pixel-based methods can be applied to a sparse image representation resulting from a superpixel segmentation. On this sparse representation we estimate only a single motion vector per superpixel, without working on the full-resolution image. This accelerates the processing of high-resolution content with existing methods. The use of superpixels in optical flow estimation has been studied before, but existing methods typically estimate a dense optical flow field (one motion vector per pixel) using the full-resolution input, which can be slow. Our novel approach offers important speed-ups compared to dense pixel-based methods, without significant loss of accuracy.
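    A simplified stand-in for the one-vector-per-superpixel idea is to aggregate the classic Lucas-Kanade normal equations over each SLIC superpixel's support instead of over a local window; the sketch below works on a greyscale image pair and is not the paper's variational formulation.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_flow(im0, im1, n_segments=800):
    """One Lucas-Kanade motion vector per SLIC superpixel.
    im0, im1: (H, W) float greyscale images of the same scene."""
    labels = slic(im0, n_segments=n_segments, channel_axis=None)
    Iy, Ix = np.gradient(im0)              # spatial derivatives
    It = im1 - im0                         # temporal derivative
    flow = {}
    for s in np.unique(labels):
        m = labels == s                    # superpixel support
        ix, iy, it = Ix[m], Iy[m], It[m]
        # normal equations of  min_{u,v} sum (ix*u + iy*v + it)^2
        A = np.array([[ix @ ix, ix @ iy],
                      [ix @ iy, iy @ iy]])
        b = -np.array([ix @ it, iy @ it])
        flow[s] = np.linalg.solve(A + 1e-6 * np.eye(2), b)
    return labels, flow
```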